
    A Note on Teaching Binomial Confidence Intervals

    For constructing confidence intervals for a binomial proportion $p$, Simon (1996, Teaching Statistics) advocates teaching one of two large-sample alternatives to the usual $z$-intervals $\hat{p} \pm 1.96 \times S.E.(\hat{p})$, where $S.E.(\hat{p}) = \sqrt{\hat{p}(1 - \hat{p})/n}$. His recommendation is based on comparing how closely the achieved coverage of each system of intervals matches its nominal level. This teaching note shows that a different alternative to $z$-intervals, called $q$-intervals, is strongly preferred to either method recommended by Simon. First, $q$-intervals are more easily motivated than even $z$-intervals because they require only a straightforward application of the Central Limit Theorem, without the need to estimate the variance of $\hat{p}$ and to justify that this perturbation does not affect the limiting normal distribution. Second, $q$-intervals do not involve the ad hoc continuity corrections of the proposals in Simon. Third, $q$-intervals achieve substantially better coverage than either system recommended by Simon.
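    The note's construction is not reproduced above, but a $q$-interval in the sense described, obtained by inverting the CLT statement $|\hat{p} - p| \le z\sqrt{p(1 - p)/n}$ directly rather than plugging an estimated variance into the standard error, amounts to solving a quadratic in $p$; this is the familiar Wilson-score construction. A minimal Python sketch under that reading (the function names and the identification with Wilson scoring are assumptions, not taken from the note):

        import math

        def z_interval(p_hat, n, z=1.96):
            """Wald z-interval: p_hat +/- z * S.E.(p_hat)."""
            se = math.sqrt(p_hat * (1 - p_hat) / n)
            return p_hat - z * se, p_hat + z * se

        def q_interval(p_hat, n, z=1.96):
            """Invert |p_hat - p| <= z * sqrt(p*(1-p)/n) by solving the
            resulting quadratic in p; no variance estimate is plugged in.
            (Assumed reading of the note's q-interval.)"""
            a = 1 + z**2 / n
            b = -(2 * p_hat + z**2 / n)
            c = p_hat**2
            disc = math.sqrt(b**2 - 4 * a * c)
            return (-b - disc) / (2 * a), (-b + disc) / (2 * a)

        print(z_interval(0.1, 25))  # (-0.0176, 0.2176): dips below 0
        print(q_interval(0.1, 25))  # (0.0315, 0.2750): stays inside [0, 1]

    For $\hat{p} = 0.1$ and $n = 25$ the Wald interval extends below zero while the inverted-CLT interval stays inside $[0, 1]$, one well-known symptom of the coverage problems the note discusses.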

    These Aren't Your Mothers and Fathers Experiments (Abstract)

    Informal experimentation is as old as humankind. Statisticians became seriously involved in the conduct of experiments during the early 1900s, when they devised methods for designing efficient field trials to improve agricultural yields. Over the rest of the 1900s, statistical methodology was developed for many complicated sampling settings and a wide variety of design objectives.

    Screening Procedures to Identify Robust Product or Process Designs Using Fractional Factorial Experiments

    In many quality improvement experiments, there are one or more "control" factors that can be modified to determine a final product design or manufacturing process, and one or more "environmental" (or "noise") factors that vary under field or manufacturing conditions. In many applications, the product or process design is considered seriously flawed if its performance is poor at any level of the environmental factor. For example, if a particular prosthetic heart valve design has poor fluid flow characteristics at certain flow rates, then a manufacturer will not want to put this design into production. Thus this paper considers cases where it is appropriate to take a product's quality to be its worst performance over the levels of the environmental factor. We consider the frequently occurring case of combined-array experiments and extend the subset selection methodology of Gupta (1956, 1965) to provide statistical screening procedures that identify product designs maximizing this worst-case performance over the environmental conditions. A case study illustrates the proposed procedures.
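    The selection constant in the paper's procedures depends on the experimental design and on Gupta's subset selection theory; the following Python sketch only illustrates the quality measure and retention rule described above, with made-up data and a placeholder constant d standing in for the constant the procedures would derive from the error distribution and the confidence level:

        import numpy as np

        # Rows: candidate product designs; columns: levels of the
        # environmental (noise) factor. Entries: observed performance,
        # larger is better. All numbers are made up for illustration.
        perf = np.array([
            [8.1, 7.9, 6.5, 7.2],
            [7.5, 7.6, 7.4, 7.3],
            [9.0, 5.2, 8.8, 6.1],
        ])

        # Quality of a design = its worst performance over the
        # environmental levels, as defined above.
        worst = perf.min(axis=1)  # [6.5 7.3 5.2]

        # Gupta-style retention rule: keep every design whose worst-case
        # performance is within d of the best observed worst-case. In the
        # actual procedures, d comes from the error distribution and the
        # desired confidence level; 0.5 is a placeholder.
        d = 0.5
        selected = np.flatnonzero(worst >= worst.max() - d)
        print(selected)  # [1]: only design 1 survives the screen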

    Selection and Screening Procedures to Determine Optimal Product Designs. (REVISED, April 1997)

    To compare several promising product designs, manufacturers must measure their performance under multiple environmental conditions. In many applications, a product design is considered seriously flawed if its performance is poor under any level of the environmental factor. For example, if a particular automobile battery design does not function well under some temperature conditions, then a manufacturer may not want to put this design into production. Thus, in this paper we take the overall measure of a given product's quality to be its worst performance over the environmental levels. We develop statistical procedures to identify the optimal (or a near-optimal) product design among a given set of designs, i.e., the manufacturing design associated with the greatest overall measure of performance. We accomplish this with intuitive procedures based on the split-plot experimental design (and the randomized complete block design as a special case); split-plot designs have the essential structure of a product array and the practical convenience of local randomization. Two classes of statistical procedures are provided. In the first, the delta-best formulation of selection problems, we determine the number of replications of the basic split-plot design needed to guarantee, with a given confidence level, the selection of a product design whose minimum performance is within a specified amount, delta, of that of the optimal product design. In particular, if the difference in quality between the best and second-best manufacturing designs is delta or more, then the procedure guarantees that the best design will be selected with the specified probability. For applications where a split-plot experiment involving several product designs has been completed without the planning required by the delta-best formulation, we provide procedures to construct a "confidence subset" of the manufacturing designs; the selected subset contains the optimal product design with a prespecified confidence level. The latter is the subset selection formulation of selection problems. Examples illustrate the procedures.
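    As an illustration of the delta-best idea, the Python sketch below simulates the rule "select the design with the largest observed worst-case mean" and estimates its probability of correct selection as the number of replications grows. The means, noise level, and replication counts are invented for illustration; the paper derives the required replication count analytically rather than by simulation:

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical true mean performance: 3 product designs (rows)
        # by 4 environmental levels (columns).
        mu = np.array([
            [8.0, 7.8, 7.0, 7.5],
            [7.4, 7.6, 7.3, 7.2],
            [8.5, 6.0, 8.2, 6.4],
        ])
        best = np.argmax(mu.min(axis=1))  # design 1: worst-case mean 7.2,
                                          # runner-up 7.0, so delta = 0.2

        def correct_selection_rate(r, sigma=0.5, trials=2000):
            """Estimate P(select the best design) from r replications."""
            hits = 0
            for _ in range(trials):
                noise = rng.normal(0.0, sigma, size=(r, *mu.shape))
                y = mu + noise.mean(axis=0)  # cell means over r replications
                hits += np.argmax(y.min(axis=1)) == best
            return hits / trials

        # More replications -> higher probability of correct selection.
        for r in (1, 4, 16):
            print(r, correct_selection_rate(r))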

    The Use of Subset Selection in Combined Array Experiments to Determine Optimal Product or Process Designs. (REVISED, June 1997)

    A number of authors in the quality control literature have advocated the use of combined arrays in screening experiments to identify robust product or process designs [Shoemaker, Tsui, and Wu (1991); Nair et al. (1992); Myers, Khuri, and Vining (1992), for example]. This paper considers a product manufacturing or process design setting in which there are several factors under the control of the manufacturer, called control settings, and other environmental (noise) factors that vary under field or manufacturing conditions. We show how the subset selection philosophy of Gupta (1956, 1965) can be used in such a quality improvement setting to identify combinations of the levels of the control factors that correspond either to products that are robust to environmental variations during their use, or to processes that fabricate items whose quality is independent of variations in the raw materials used in their manufacture.

    09181 Abstracts Collection -- Sampling-based Optimization in the Presence of Uncertainty

    This Dagstuhl seminar brought together researchers from statistical ranking and selection; experimental design and response-surface modeling; stochastic programming; approximate dynamic programming; optimal learning; and the design and analysis of computer experiments, with the goals of attaining a much better mutual understanding of the commonalities and differences among the various approaches to sampling-based optimization, and of taking first steps toward an overarching theory encompassing many of the topics above.

    Testing for Activation in Data from FMRI Experiments

    The traditional method for processing functional magnetic resonance imaging (FMRI) data is based on a voxel-wise general linear model. For experiments conducted using a block design, in which periods of activation are interspersed with periods of rest, a haemodynamic response function (HRF) is convolved with the design function and, for each voxel, the convolution is regressed on prewhitened data. An initial analysis of the data often involves computing voxel-wise two-sample t-tests, which avoids a direct specification of the HRF. Assuming only the length of the haemodynamic delay is known, scans acquired in transition periods between activation and rest are omitted, and the two-sample t-test is used to compare mean levels during activation with mean levels during rest. However, the validity of the two-sample t-test rests on the assumption that the data are Gaussian with equal variances. In this article, we consider the Wilcoxon rank test as well as modified versions of the classical t-test that correct for departures from these assumptions. The relative performance of the tests is assessed by applying them to simulated data and comparing their size and power; one of the modified tests (the CW test) is shown to be superior.
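    For a miniature version of this comparison, the following Python sketch applies the classical two-sample t-test, a variance-corrected (Welch) t-test, and the Wilcoxon rank-sum test to simulated rest/activation samples with unequal variances. It ignores the temporal autocorrelation that the paper handles by prewhitening, and it does not reproduce the paper's CW test; it only illustrates the kind of size/power comparison described:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        # Simulated scans for one voxel under a block design, with the
        # transition scans already dropped: rest and activation samples
        # with a small mean shift and unequal variances.
        rest = rng.normal(loc=100.0, scale=2.0, size=40)
        active = rng.normal(loc=101.5, scale=4.0, size=40)

        t_classic = stats.ttest_ind(active, rest, equal_var=True)
        t_welch = stats.ttest_ind(active, rest, equal_var=False)  # variance-corrected
        rank = stats.ranksums(active, rest)  # Wilcoxon rank-sum test

        print("classical t:", t_classic.pvalue)
        print("Welch t:    ", t_welch.pvalue)
        print("rank-sum:   ", rank.pvalue)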

    A modular analysis of the Auxin signalling network

    Auxin is essential for plant development from embryogenesis onwards, and it acts in large part through the regulation of transcription. The proteins acting in the signalling pathway that regulates transcription downstream of auxin have been identified, as have the interactions between these proteins, thus establishing the topology of this network, which involves 54 Auxin Response Factor (ARF) and Aux/IAA (IAA) transcriptional regulators. Here, we study the auxin signalling pathway by means of mathematical modelling at the single-cell level. We proceed analytically, by considering the roles played by five functional modules into which the auxin pathway can be decomposed: the sequestration of ARF by IAA, transcriptional repression by IAA, dimer formation among ARFs and IAAs, the feedback loop on IAA, and the auxin-induced degradation of IAA proteins. Focusing on these modules makes it possible to assess their function within the dynamics of auxin signalling. One key outcome of this analysis is that the major modules of the signalling pathway have both specific and overlapping functions, suggesting that the modules act combinatorially to optimize the speed and amplitude of auxin-induced transcription. Our work identifies potential functions for homo- and hetero-dimerization of transcriptional regulators, with ARF:IAA, IAA:IAA and ARF:ARF dimerization respectively controlling the amplitude, speed and sensitivity of the response, and a synergistic effect of the interaction of IAA with transcriptional repressors on these characteristics of the signalling pathway. Finally, we suggest experiments that might allow disentangling the structure of the auxin signalling pathway and further analysing its function in plants.
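    The paper's model is not reproduced here, but two of the modules named above, the negative feedback of IAA on its own transcription (via sequestration of activating ARF) and the auxin-induced degradation of IAA, can be caricatured in a few lines of Python. All parameter names and values below are illustrative assumptions, not taken from the paper:

        # A deliberately minimal caricature of two modules: negative
        # feedback of IAA on its own transcription (activating ARF is
        # sequestered by IAA) and auxin-induced degradation of IAA.
        def diaa_dt(iaa, auxin, k_prod=1.0, K=0.5, k_deg0=0.1, k_aux=1.0):
            production = k_prod * K / (K + iaa)            # repression via ARF sequestration
            degradation = (k_deg0 + k_aux * auxin) * iaa   # auxin accelerates IAA turnover
            return production - degradation

        # Forward-Euler simulation of the response to an auxin step.
        dt, steps = 0.01, 6000
        iaa, trace = 1.0, []
        for i in range(steps):
            auxin = 0.0 if i < steps // 2 else 1.0  # auxin switched on halfway
            iaa += dt * diaa_dt(iaa, auxin)
            trace.append(iaa)

        print(trace[steps // 2 - 1], trace[-1])  # IAA steady state ~2.0 before, ~0.47 after

    Switching auxin on degrades IAA, which relaxes the repression term and so mirrors auxin-induced transcriptional activation; the richer behaviour (dimerization, sensitivity tuning) requires the full model analysed in the paper.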